Iterative soft-thresholding converges linearly
Authors
Abstract
In this article, the convergence of the often-used iterative soft-thresholding algorithm for the solution of linear operator equations in infinite-dimensional Hilbert spaces is analyzed in detail. As the main result, we show that the algorithm converges at a linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property. This result quantifies the common experience that iterative soft-thresholding converges very slowly: we argue that the constant of the linear rate is typically very close to one. Moreover, it is shown that the constants can be calculated explicitly if some knowledge of the operator is available (e.g. for compact operators). The proof is based on gradient descent arguments for a generalized projected gradient method and utilizes a new technique, called the Bregman-Taylor distance.
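The iteration analyzed above is simple to state. As a minimal finite-dimensional sketch (the paper's setting is infinite-dimensional Hilbert space; the names `ista` and `soft_threshold` are illustrative), iterative soft-thresholding for min_x ½‖Ax − y‖² + λ‖x‖₁ might look as follows:

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding: S_tau(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Step size 1/L with L = ||A||^2 (spectral norm squared) ensures convergence."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth quadratic part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

For a well-conditioned A the contraction factor per iteration is bounded away from one, but as the abstract notes, in typical ill-posed problems it is very close to one, which is why the method feels slow in practice.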
Similar resources
Linear convergence of iterative soft-thresholding
In this article, the convergence of the often-used iterative soft-thresholding algorithm for the solution of linear operator equations in infinite-dimensional Hilbert spaces is analyzed in detail. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. The analysis is based on new techniques such as the Bregman-Taylor distance. As the main result...
Convergence of the Generalized Alternating Projection Algorithm for Compressive Sensing
The convergence of the generalized alternating projection (GAP) algorithm is studied in this paper for solving the compressive sensing problem y = Ax + ε. By assuming that AAᵀ is invertible, we prove that GAP converges linearly within a certain range of step sizes when the sensing matrix A satisfies the restricted isometry property (RIP) condition with constant δ_{2K}, where K is the sparsity of x. The theoretical a...
Literature Review on Sparse Optimization
Some thoughts about algorithms for minimizing ℓ1 and TV norms for signal and image recovery. In particular, I put the emphasis on the connections between Lagrangian and ℓ1-constrained optimizations, and between analysis and synthesis priors. 1 Lagrangian and Constrained ℓ1 Pursuits Lagrangian ℓ1 pursuit reads ã = argmin_b (1/2)‖f − Φ*b‖² + T‖b‖₁. (1) It is a Lagrangian formulation of the following ...
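A side note on equation (1): when Φ* has orthonormal columns (ΦΦ* = I), the Lagrangian ℓ1 pursuit admits the closed-form solution ã = S_T(Φf), i.e. soft-thresholding of the analysis coefficients. A small numerical sketch (the rotation matrix Phi, signal f, and threshold T below are illustrative values, not from the text):

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding S_tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# A 2x2 orthonormal Phi (a rotation), so Phi @ Phi.T = I.
theta = np.pi / 6
Phi = np.array([[np.cos(theta),  np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])
f = np.array([1.0, 0.5])
T = 0.2

# Closed-form minimizer of (1) in the orthonormal case.
b_hat = soft_threshold(Phi @ f, T)
```

The identity follows because ½‖f − Φ*b‖² = ½‖Φf − b‖² up to a constant when Φ* is an isometry, and the lasso with identity forward operator separates coordinatewise.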
Convergence of an inertial proximal method for l1-regularized least-squares
A fast, low-complexity algorithm for solving the ℓ1-regularized least-squares problem is devised and analyzed. Our algorithm, which we call the Inertial Iterative Soft-Thresholding Algorithm (I-ISTA), incorporates inertia into a forward-backward proximal splitting framework. We show that I-ISTA has a linear rate of convergence with a smaller asymptotic error constant than the well-known Iterat...
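A generic inertial forward-backward sketch conveys the idea; note that this uses a fixed momentum weight beta and is not the paper's exact I-ISTA parameter choice:

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft-thresholding S_tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def inertial_ista(A, y, lam, beta=0.5, n_iter=500):
    """Inertial forward-backward sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    beta is a fixed, illustrative momentum weight; the paper's I-ISTA uses
    its own analyzed parameter choice."""
    L = np.linalg.norm(A, 2) ** 2
    x_prev = x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + beta * (x - x_prev)        # inertial extrapolation point
        x_prev, x = x, soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
    return x
```

The only change relative to plain iterative soft-thresholding is that the gradient step is taken at the extrapolated point z rather than at x, which is what yields the improved error constant the abstract refers to.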
Accelerated Projected Gradient Method for Linear Inverse Problems with Sparsity Constraints
Regularization of ill-posed linear inverse problems via ℓ1 penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an ℓ1-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of the ℓ1 constraint, using a gradient method with projection onto ℓ1-balls. The correspondin...
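The key ingredient of such a projected gradient method is the Euclidean projection onto an ℓ1-ball. A standard sort-based routine in the style of Duchi et al. (the function name is illustrative) could look like this:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the ball {x : ||x||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()                          # already feasible
    u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
    cssv = np.cumsum(u) - radius                 # cumulative sums minus radius
    # Largest index rho with u[rho] * (rho + 1) > cssv[rho].
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)              # optimal soft threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The projection itself is a soft-thresholding with a data-dependent threshold theta, which is why the ℓ1-constrained and ℓ1-penalized formulations are so closely related.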
Journal title:
Volume / Issue:
Pages: -
Publication date: 2007